A continuous Newton-type method for unconstrained optimization

Authors

  • Lei-Hong Zhang
  • C. T. Kelley
  • Li-Zhi Liao
Abstract

In this paper, we propose a continuous Newton-type method in the form of an ordinary differential equation by combining the negative gradient and Newton's direction. It is shown that for a general function f(x), our method converges globally to a connected subset of the stationary points of f(x) under some mild conditions, and converges globally to a single stationary point when f(x) is real analytic. The method reduces to the exact continuous Newton method if the Hessian matrix of f(x) is positive definite. The convergence of the new method on a set of standard test problems from the literature is also reported.
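
As a rough illustration of the idea (not the authors' exact formulation), the right-hand side of such an ODE can use the exact Newton direction wherever the Hessian is positive definite and fall back toward a gradient-leaning direction elsewhere. The sketch below integrates this flow with scipy; the eigenvalue-shift fallback is an assumption introduced here for illustration.

```python
# Minimal sketch of a continuous Newton-type flow dx/dt = -d(x):
# d(x) is the exact Newton direction where the Hessian is positive
# definite, and a shifted (gradient-leaning) direction elsewhere.
# The shift rule below is an illustrative assumption, not the
# authors' exact formulation.
import numpy as np
from scipy.integrate import solve_ivp

def direction(x, grad, hess):
    g, H = grad(x), hess(x)
    try:
        np.linalg.cholesky(H)          # succeeds iff H is positive definite
        return np.linalg.solve(H, g)   # exact Newton direction H^{-1} g
    except np.linalg.LinAlgError:
        # Hypothetical fallback: shift H so the flow remains a descent
        # flow, interpolating between Newton and gradient behavior.
        lam = abs(np.linalg.eigvalsh(H).min()) + 1.0
        return np.linalg.solve(H + lam * np.eye(len(x)), g)

def continuous_newton(x0, grad, hess, t_max=50.0):
    sol = solve_ivp(lambda t, x: -direction(x, grad, hess),
                    (0.0, t_max), x0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

# Usage on the Rosenbrock function, whose minimizer is (1, 1)
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                           [-400*x[0], 200.0]])
print(continuous_newton(np.array([-1.2, 1.0]), grad, hess))
```

Along either branch of the flow, df/dt = -g^T M^{-1} g < 0 for a positive definite M, so f decreases monotonically along trajectories.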


Similar Articles

A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems

In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
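
The snippet does not give the exact two-parameter formula. As a hedged stand-in, a classical two-parameter scaled (Oren–Luenberger-style) BFGS update has the shape below; choosing gamma = 1 makes the update satisfy the secant relation B+ s = y, and positive definiteness is preserved whenever the curvature y^T s is positive.

```python
# Illustrative two-parameter scaled BFGS-type update (Oren–Luenberger
# style), a stand-in for the paper's "double-parameter scaled
# quasi-Newton formula", whose exact form is not shown in the snippet.
import numpy as np

def scaled_bfgs_update(B, s, y, theta=1.0, gamma=1.0):
    """B+ = theta*(B - B s s^T B / s^T B s) + gamma * y y^T / (y^T s).

    With gamma = 1 the update satisfies the secant relation B+ s = y;
    it stays positive definite whenever theta > 0, gamma > 0, and the
    curvature condition y^T s > 0 holds.
    """
    Bs, curv = B @ s, y @ s
    if curv <= 0:
        return B  # skip the update when the curvature condition fails
    return theta * (B - np.outer(Bs, Bs) / (s @ Bs)) + gamma * np.outer(y, y) / curv
```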


On the Behavior of Damped Quasi-Newton Methods for Unconstrained Optimization

We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles the Broyden family with line searches, except that the change in gradients is replaced by a certain hybrid vector before updating the current Hessian approximation. This damped technique modifies the Hessian approximations so that they are maintained sufficiently positive defi...
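
Powell's classical damping is the best-known rule of this kind: the gradient change y is replaced by a hybrid vector r = phi*y + (1 - phi)*B s, with phi chosen so the curvature s^T r stays safely positive. The paper's family may use a different hybrid vector; this sketch shows the classical rule.

```python
# Sketch of Powell-style damping: replace the gradient change y with a
# hybrid vector r before the BFGS update, so that s^T r stays bounded
# below by delta * s^T B s and the updated matrix remains positive
# definite. This is the classical rule, not necessarily the paper's.
import numpy as np

def damped_bfgs_update(B, s, y, delta=0.2):
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    # Damp only when the curvature s^T y is too small relative to s^T B s.
    phi = 1.0 if sy >= delta * sBs else (1.0 - delta) * sBs / (sBs - sy)
    r = phi * y + (1.0 - phi) * Bs      # hybrid vector; s^T r >= delta * s^T B s
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)
```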


An efficient improvement of the Newton method for solving nonconvex optimization problems

Newton's method is one of the most famous numerical methods among the line search methods for minimizing functions. It is well known that the search direction and the step length play important roles in this class of methods for solving optimization problems. In this investigation, a new modification of the Newton method for solving unconstrained optimization problems is presented. The significant ...
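
The snippet names the two ingredients without giving the modification itself. For reference, the baseline that such modifications start from is Newton's method with a backtracking (Armijo) line search; the steepest-descent fallback below is a common safeguard for the nonconvex case, not the paper's specific fix.

```python
# Baseline: Newton's method with an Armijo backtracking line search.
# The descent-direction safeguard is a common nonconvex fix, not the
# specific modification proposed in the paper.
import numpy as np

def newton_armijo(f, grad, hess, x, tol=1e-8, max_iter=100,
                  c1=1e-4, shrink=0.5):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -np.linalg.solve(hess(x), g)   # Newton search direction
        if g @ d >= 0:                     # not a descent direction:
            d = -g                         # fall back to steepest descent
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + c1 * t * (g @ d):
            t *= shrink                    # backtrack on the step length
        x = x + t * d
    return x
```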


A limited memory adaptive trust-region approach for large-scale unconstrained optimization

This study concerns a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited-memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique reduces the number of subproblems that must be solved, while utilizing the structure of limited memory quasi-Newt...
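
The snippet does not spell out the adaptive rule. For orientation, the standard ratio-based radius update that adaptive strategies refine compares the actual reduction in f to the reduction predicted by the quadratic model; the thresholds and factors below are conventional textbook values, not the paper's specific rule.

```python
# Standard ratio-based trust-region radius update that adaptive-radius
# strategies refine. rho = (actual reduction) / (model-predicted
# reduction); the thresholds 0.25 / 0.75 and the factors below are
# conventional textbook values, not the paper's adaptive rule.
def update_radius(radius, rho, step_norm, max_radius=100.0):
    if rho < 0.25:                            # poor model agreement: shrink
        return 0.25 * step_norm
    if rho > 0.75 and step_norm >= 0.999 * radius:
        return min(2.0 * radius, max_radius)  # step hit the boundary: expand
    return radius                             # otherwise keep the radius
```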


A Modified Regularized Newton Method for Unconstrained Nonconvex Optimization

In this paper, we present a modified regularized Newton method for unconstrained nonconvex optimization using a trust-region technique. We show that if the gradient and Hessian of the objective function are Lipschitz continuous, then the modified regularized Newton method (M-RNM) has a global convergence property. Numerical results show that the algorithm is very efficient.
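
A regularized Newton step solves (H + lam*I) d = -g so that the step is well defined even when H is indefinite. The sketch below ties lam to the gradient norm, a common choice of Levenberg–Marquardt type; it is not necessarily the exact M-RNM rule.

```python
# Minimal regularized Newton step: solve (H + lam*I) d = -g, increasing
# lam until the shifted Hessian is positive definite. Tying lam to
# ||g|| is a common Levenberg–Marquardt-type choice, not necessarily
# the exact M-RNM rule.
import numpy as np

def regularized_newton_step(g, H, c=1.0):
    lam = max(c * np.linalg.norm(g), 1e-12)
    n = len(g)
    while True:
        try:
            np.linalg.cholesky(H + lam * np.eye(n))  # positive definite?
            break
        except np.linalg.LinAlgError:
            lam *= 10.0                              # enlarge the shift
    return np.linalg.solve(H + lam * np.eye(n), -g)
```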



Journal:

Volume   Issue

Pages  -

Publication date: 2007